105 research outputs found

    Remembered landmarks enhance the precision of path integration

    When navigating by path integration, knowledge of one’s position becomes increasingly uncertain as one walks from a known location. This uncertainty decreases if one perceives a known landmark location nearby. We hypothesized that remembering landmarks might serve a similar purpose for path integration as directly perceiving them. If this is true, walking near a remembered landmark location should enhance response consistency in path integration tasks. To test this, we asked participants to view a target and then attempt to walk to it without vision. Some participants saw the target plus a landmark during the preview. Compared with no-landmark trials, response consistency nearly doubled when participants passed near the remembered landmark location. Similar results were obtained when participants could audibly perceive the landmark while walking. A control experiment ruled out perceptual context effects during the preview. We conclude that remembered landmarks can enhance path integration even though they are not directly perceived.
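The core idea above can be illustrated with a minimal sketch (not the authors' model): positional uncertainty accumulates with each step of path integration, and a remembered landmark can act like a noisy measurement that shrinks the variance, Kalman-style. The noise values are assumptions chosen for illustration.

```python
# Illustrative sketch (assumed parameters, not the study's model): 1-D
# positional variance grows during path integration; passing a remembered
# landmark fuses in an extra estimate and reduces the variance.

STEP_NOISE_VAR = 0.01   # assumed idiothetic noise added per step (m^2)
LANDMARK_VAR = 0.05     # assumed uncertainty of the remembered landmark (m^2)

def walk(n_steps, landmark_at=None):
    """Return positional variance after n_steps, optionally fusing a
    landmark 'measurement' at step landmark_at."""
    var = 0.0
    for step in range(1, n_steps + 1):
        var += STEP_NOISE_VAR          # uncertainty accumulates while walking
        if step == landmark_at:
            # precision-weighted fusion: 1/var_new = 1/var + 1/LANDMARK_VAR
            var = (var * LANDMARK_VAR) / (var + LANDMARK_VAR)
    return var

no_landmark = walk(20)
with_landmark = walk(20, landmark_at=10)
print(no_landmark, with_landmark)  # variance is lower with the landmark
```

Lower variance corresponds to higher response consistency, which is the pattern the abstract reports for trials that passed near the remembered landmark.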

    Hyper-arousal decreases human visual thresholds

    Arousal has long been known to influence behavior and serves as an underlying component of cognition and consciousness. However, the consequences of hyper-arousal for visual perception remain unclear. The present study evaluates the impact of hyper-arousal on two aspects of visual sensitivity: visual stereoacuity and contrast thresholds. Sixty-eight participants took part in two experiments. In each experiment, thirty-four participants were randomly divided into two groups: Arousal Stimulation or Sham Control. The Arousal Stimulation group underwent a 50-second cold pressor stimulation (immersing the foot in 0–2 °C water), a technique known to increase arousal. In contrast, the Sham Control group immersed their foot in room-temperature water. Stereoacuity thresholds (Experiment 1) and contrast thresholds (Experiment 2) were measured before and after stimulation. The Arousal Stimulation groups demonstrated significantly lower stereoacuity and contrast thresholds following cold pressor stimulation, whereas the Sham Control groups showed no difference in thresholds. These results provide the first evidence that hyper-arousal from sensory stimulation can lower visual thresholds. Hyper-arousal's ability to decrease visual thresholds has important implications for survival, sports, and everyday life.

    Non-sensory inputs to angular path integration

    Non-sensory (cognitive) inputs can play a powerful role in monitoring one's self-motion. Previously, we showed that access to spatial memory dramatically increases response precision in an angular self-motion updating task [1]. Here, we examined whether spatial memory also enhances a particular type of self-motion updating - angular path integration. Angular path integration refers to the ability to maintain an estimate of self-location after a rotational displacement by integrating internally-generated (idiothetic) self-motion signals over time. It was hypothesized that remembered spatial frameworks derived from vision and spatial language should facilitate angular path integration by decreasing the uncertainty of self-location estimates. To test this, we implemented a whole-body rotation paradigm with passive, non-visual body rotations (ranging from 40° to 140°) administered about the yaw axis. Prior to the rotations, visual previews (Experiment 1) and verbal descriptions (Experiment 2) of the surrounding environment were given to participants. Perceived angular displacement was assessed by open-loop pointing to the origin (0°). We found that within-subject response precision significantly increased when participants were provided a spatial context prior to whole-body rotations. The present study goes beyond our previous findings by first establishing that memory of the environment enhances the processing of idiothetic self-motion signals. Moreover, we show that knowledge of one's immediate environment, whether gained from direct visual perception or from indirect experience (i.e., spatial language), facilitates the integration of incoming self-motion signals.
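The integration process the abstract describes can be sketched in a few lines (an illustration under assumed noise values, not the study's analysis): angular path integration sums noisy idiothetic yaw-rate samples over time, and the pointing response back to the origin is the negative of the integrated displacement.

```python
import random

# Minimal sketch (assumed parameters): estimate total rotation by
# integrating noisy yaw-rate samples; nonzero noise_sd models idiothetic
# signal noise, which spatial context is hypothesized to reduce.

def integrate_yaw(yaw_rates_dps, dt=0.1, noise_sd=0.5, seed=0):
    """Integrate yaw-rate samples (deg/s) into an angular displacement
    estimate (deg), corrupting each sample with Gaussian noise."""
    rng = random.Random(seed)
    angle = 0.0
    for rate in yaw_rates_dps:
        angle += (rate + rng.gauss(0.0, noise_sd)) * dt
    return angle

# A passive 90-degree rotation delivered as 30 samples of 30 deg/s:
estimate = integrate_yaw([30.0] * 30)
pointing_response = -estimate  # open-loop pointing back toward 0 degrees
```

With `noise_sd=0.0` the estimate recovers the true 90° rotation exactly; the noise term is where within-subject variability in the pointing response arises.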

    The Underestimation Of Egocentric Distance: Evidence From Frontal Matching Tasks

    There is controversy over the existence, nature, and cause of error in egocentric distance judgments. One proposal is that the systematic biases often found in explicit judgments of egocentric distance along the ground may be related to recently observed biases in the perceived declination of gaze (Durgin & Li, Attention, Perception, & Psychophysics, in press). To measure perceived egocentric distance nonverbally, observers in a field were asked to position themselves so that their distance from one of two experimenters was equal to the frontal distance between the experimenters. Observers placed themselves too far away, consistent with egocentric distance underestimation. A similar experiment was conducted with vertical frontal extents. Both experiments were replicated in panoramic virtual reality. Perceived egocentric distance was quantitatively consistent with angular bias in perceived gaze declination (1.5 gain). Finally, an exocentric distance-matching task was contrasted with a variant of the egocentric matching task. The egocentric matching data approximate a constant compression of perceived egocentric distance with a power function exponent of nearly 1; exocentric matches had an exponent of about 0.67. The divergent pattern between egocentric and exocentric matches suggests that they depend on different visual cues.
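The angular-bias account mentioned above can be made concrete with a short sketch (an assumed form for illustration, not the paper's exact model): if perceived gaze declination is exaggerated by a gain of 1.5, then the ground distance recovered from that declination is compressed, and increasingly so at farther distances. The eye height is an assumed value.

```python
import math

# Sketch of the gaze-declination account (assumed form, for illustration):
# perceived distance d' = h / tan(gain * atan(h / d)), with gain = 1.5
# as reported in the abstract.

EYE_HEIGHT = 1.6  # assumed observer eye height in metres
GAIN = 1.5        # gain on perceived gaze declination

def perceived_distance(actual_m, eye_height=EYE_HEIGHT, gain=GAIN):
    """Map an actual ground distance to its perceived distance under the
    exaggerated-declination model."""
    declination = math.atan2(eye_height, actual_m)  # true gaze angle below horizontal
    return eye_height / math.tan(gain * declination)

for d in (2.0, 5.0, 10.0):
    print(f"{d:4.1f} m is modeled as {perceived_distance(d):4.1f} m")
```

Under these assumed values, a 5 m target is modeled as appearing roughly 3.2 m away, which is the direction of the underestimation the matching task revealed.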

    Sensory substitution for force feedback recovery: A perception experimental study

    Robotic-assisted surgeries are commonly used today as a more efficient alternative to traditional surgical options. Both surgeons and patients benefit from those systems, as they offer many advantages, including less trauma and blood loss, fewer complications, and better ergonomics. However, a remaining limitation of currently available surgical systems is the lack of force feedback due to the teleoperation setting, which prevents direct interaction with the patient. Once the force information is obtained by either a sensing device or indirectly through vision-based force estimation, a concern arises on how to transmit this information to the surgeon. An attractive alternative is sensory substitution, which allows transcoding information from one sensory modality to present it in a different sensory modality. In the current work, we used visual feedback to convey interaction forces to the surgeon. Our overarching goal was to address the following question: How should interaction forces be displayed to support efficient comprehension by the surgeon without interfering with the surgeon’s perception and workflow during surgery? Until now, the use of the visual modality for force feedback has not been carefully evaluated. For this reason, we conducted an experimental study with two aims: (1) to demonstrate the potential benefits of using this modality and (2) to understand the surgeons’ perceptual preferences. The results derived from our study of 28 surgeons revealed a strong positive acceptance of the users (96%) using this modality. Moreover, we found that for surgeons to easily interpret the information, their mental model must be considered, meaning that the design of the visualizations should fit the perceptual and cognitive abilities of the end user. To our knowledge, this is the first time that these principles have been analyzed for exploring sensory substitution in medical robotics. Finally, we provide user-centered recommendations for the design of visual displays for robotic surgical systems.
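The transcoding step at the heart of sensory substitution can be sketched as follows (a hypothetical mapping for illustration; the thresholds and traffic-light scheme are assumptions, not the study's implementation): a sensed force magnitude is converted into a discrete visual cue that could be overlaid on the surgeon's video feed.

```python
# Illustrative sketch (assumed thresholds, not the study's design):
# transcode a sensed interaction force into a traffic-light visual cue.

FORCE_BANDS = [            # assumed force thresholds in newtons
    (0.5, "green"),        # safe interaction force
    (1.5, "yellow"),       # caution
    (float("inf"), "red"), # excessive force
]

def force_to_colour(force_n):
    """Map a force magnitude (N) onto a discrete colour cue."""
    for threshold, colour in FORCE_BANDS:
        if force_n <= threshold:
            return colour

print(force_to_colour(0.3))  # → green
```

A discrete mapping like this is one way to "fit the perceptual and cognitive abilities of the end user": a small number of categorical levels is faster to read mid-task than a continuous numeric readout.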

    Intrinsic frames of reference in haptic spatial learning

    It has been proposed that spatial reference frames with which object locations are specified in memory are intrinsic to a to-be-remembered spatial layout (intrinsic reference theory). Although this theory has been supported by accumulating evidence, it has only been collected from paradigms in which the entire spatial layout was simultaneously visible to observers. The present study was designed to examine the generality of the theory by investigating whether the geometric structure of a spatial layout (bilateral symmetry) influences selection of spatial reference frames when object locations are sequentially learned through haptic exploration. In two experiments, participants learned the spatial layout solely by touch and performed judgments of relative direction among objects using their spatial memories. Results indicated that the geometric structure can provide a spatial cue for establishing reference frames as long as it is accentuated by explicit instructions (Experiment 1) or alignment with an egocentric orientation (Experiment 2). These results are entirely consistent with those from previous studies in which spatial information was encoded through simultaneous viewing of all object locations, suggesting that the intrinsic reference theory is not specific to a type of spatial memory acquired by the particular learning method but instead generalizes to spatial memories learned through a variety of encoding conditions. In particular, the present findings suggest that spatial memories that follow the intrinsic reference theory function equivalently regardless of the modality in which spatial information is encoded.

    Peripheral vision benefits spatial learning by guiding eye movements

    The loss of peripheral vision impairs spatial learning and navigation. However, the mechanisms underlying these impairments remain poorly understood. One advantage of having peripheral vision is that objects in an environment are easily detected and readily foveated via eye movements. The present study examined this potential benefit of peripheral vision by investigating whether competent performance in spatial learning requires effective eye movements. In Experiment 1, participants learned room-sized spatial layouts with or without restriction on direct eye movements to objects. Eye movements were restricted by having participants view the objects through small apertures in front of their eyes. Results showed that impeding effective eye movements made subsequent retrieval of spatial memory slower and less accurate. The small apertures also occluded much of the environmental surroundings, but the importance of this kind of occlusion was ruled out in Experiment 2 by showing that participants exhibited intact learning of the same spatial layouts when luminescent objects were viewed in an otherwise dark room. Together, these findings suggest that one of the roles of peripheral vision in spatial learning is to guide eye movements, highlighting the importance of spatial information derived from eye movements for learning environmental layouts.

    Visually directed walking to briefly glimpsed targets is not biased toward fixation location

    When observers indicate the magnitude of a previously viewed spatial extent by walking without vision to each endpoint, there is little evidence of the perceptual collapse in depth associated with some other methods (e.g., visual matching). One explanation is that both walking and matching are perceptually mediated, but that the perceived layout is task-dependent. In this view, perceived depth beyond 2–3 m is typically distorted by an equidistance effect, whereby the egocentric distances of nonfixated portions of the depth interval are perceptually pulled toward the fixated point. Action-based responses, however, recruit processes that enhance perceptual accuracy as the stimulus configuration is inspected. This predicts that walked indications of egocentric distance performed without vision should exhibit equidistance effects at short exposure durations, but become more accurate at longer exposures. In this paper, two experiments demonstrate that in a well-lit environment there is substantial perceptual anisotropy at near distances (3–5 m), but that walked indications of egocentric distance are quite accurate after brief glimpses (150 ms), even when the walking target is not directly fixated. Longer exposures do not increase accuracy. The results are clearly inconsistent with the task-dependent information processing explanation, but do not rule out others in which perception mediates both walking and visual matches.

    Knowledge about typical source output influences perceived auditory distance

    Vocal effort is known to influence the judged distance of speech-sound sources. The present research examined whether this influence is due to long-term experience gained prior to the experiment versus short-term experience gained from exposure to speech stimuli earlier in the same experiment. Speech recordings were presented to 192 blindfolded listeners at three levels of vocal output. Even upon the first presentation, shouting voices were reported as appearing farthest, whispered voices closest. This suggests that auditory distance perception can be affected by past experience in a way that does not require explicit comparisons between individual stimuli.